Research on vibration characteristics of gear-coupled multi-shaft rotor-bearing systems under the excitation of unbalance
To determine the effect of gear-wheel eccentricity on the inherent characteristics of a gear-rotor system, this paper establishes a general transverse-rotational-axial-swinging multi-degree-of-freedom coupled helical gear meshing dynamic model based on the finite element method (FEM). Considering the influence of the azimuth angle, the meshing angle, the helix angle, and the rotation direction of the driving shaft on the mesh stiffness matrix, it analyzes the effect of mesh stiffness and mesh damping on the inherent characteristics and transient response of the system. It obtains the displacement response curves and dynamic meshing force curves of all nodes under the excitations of static transmission error and unbalance while accounting for mesh damping. It concludes that the effects of gear coupling and gear-wheel eccentricity should be taken into account in a multi-parallel-shaft gear meshing rotor system.
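The inherent characteristics described above come from a generalized eigenvalue problem of the assembled mass and stiffness matrices. As a minimal sketch (not the paper's FEM model), consider two lumped rotor nodes coupled by a mesh stiffness; all numeric values below are illustrative assumptions:

```python
import numpy as np

# Hypothetical two-DOF illustration: two rotor nodes on bearing supports,
# coupled through a gear mesh stiffness k_m (values are made up for the sketch).
m1, m2 = 2.0, 3.0          # lumped masses, kg
k1, k2 = 1.0e6, 1.5e6      # bearing support stiffnesses, N/m
k_m = 5.0e5                # gear mesh stiffness coupling the two shafts, N/m

M = np.diag([m1, m2])
K = np.array([[k1 + k_m, -k_m],
              [-k_m,     k2 + k_m]])

# Inherent characteristics: solve K x = w^2 M x for the natural frequencies.
w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
f_nat = np.sort(np.sqrt(w2.real)) / (2 * np.pi)   # natural frequencies, Hz
print(f_nat)
```

In a full FEM gear-rotor model, K additionally carries the azimuth-, helix-, and pressure-angle-dependent mesh stiffness coupling terms between the parallel shafts, but the eigenproblem has the same form.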
CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer
A loss of content affinity, covering both feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer. This paper proposes a new framework named CAP-VSTNet, which consists of a new reversible residual network and an unbiased linear transform module, for versatile style transfer. The reversible residual network preserves content affinity without introducing the redundant information of traditional reversible networks, and hence facilitates better stylization. Empowered by a Matting Laplacian training loss that addresses the pixel affinity loss caused by the linear transform, the proposed framework is applicable and effective for versatile style transfer. Extensive experiments show that CAP-VSTNet produces better qualitative and quantitative results than state-of-the-art methods.
Comment: CVPR 202
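The key property of a reversible residual network is that inputs can be reconstructed exactly from outputs, so no content information is lost through depth. A minimal numpy sketch of a generic additive coupling block (not CAP-VSTNet's actual architecture; `f` stands in for a learned sub-network):

```python
import numpy as np

def f(x, w):
    # Toy channel-mixing function standing in for a learned sub-network.
    return np.tanh(x @ w)

def forward(x1, x2, w1, w2):
    # Additive coupling: each half is updated using the other half only.
    y1 = x1 + f(x2, w1)
    y2 = x2 + f(y1, w2)
    return y1, y2

def inverse(y1, y2, w1, w2):
    # Exact inversion by undoing the two updates in reverse order.
    x2 = y2 - f(y1, w2)
    x1 = y1 - f(x2, w1)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
w1, w2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
y1, y2 = forward(x1, x2, w1, w2)
r1, r2 = inverse(y1, y2, w1, w2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True: exact reconstruction
```

Because inversion is exact regardless of what `f` computes, such blocks can stylize features while still allowing the content to be recovered, which is the property the abstract ties to preserved content affinity.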
Attention-based Multi-modal Fusion Network for Semantic Scene Completion
This paper presents an end-to-end 3D convolutional network, the attention-based multi-modal fusion network (AMFNet), for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from single-view RGB-D images. Unlike previous methods that use only the semantic features extracted from RGB-D images, the proposed AMFNet learns to perform effective 3D scene completion and semantic segmentation simultaneously by leveraging the experience of inferring 2D semantic segmentation from RGB-D images as well as reliable depth cues in the spatial dimension. This is achieved with a multi-modal fusion architecture boosted from 2D semantic segmentation and a 3D semantic completion network empowered by residual attention blocks. We validate our method on both the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset, and the results show that our method achieves gains of 2.5% and 2.6%, respectively, against the state-of-the-art method.
Comment: Accepted by AAAI 202
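A residual attention block of the kind mentioned in the abstract can be sketched generically: features from one modality produce a gate that reweights the other modality's features, added back residually. This is an illustrative assumption about the block's form, not AMFNet's published layer definition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_fusion(f_2d, f_3d, w_att):
    # Attention weights in (0, 1) derived from the 2D semantic features,
    # used to gate the 3D completion features; the identity path keeps
    # the original 3D features so the block is residual.
    att = sigmoid(f_2d @ w_att)
    return f_3d + att * f_3d

rng = np.random.default_rng(0)
f_2d = rng.normal(size=(16, 32))   # flattened per-voxel 2D semantic features
f_3d = rng.normal(size=(16, 32))   # per-voxel 3D completion features
w_att = rng.normal(size=(32, 32))  # hypothetical attention projection
fused = residual_attention_fusion(f_2d, f_3d, w_att)
print(fused.shape)
```

Since the gate lies in (0, 1), the fused output never shrinks below the identity path, which is what makes residual attention safe to stack in a deep completion network.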